Overview of the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management

Interactive AI Magazine

IC3K 2025 (the 17th International Joint Conference on Knowledge Discovery, Knowledge Engineering and Knowledge Management) received 163 paper submissions from 40 countries. Each submission was evaluated in a double-blind review by the Program Committee. After a stringent selection process, 31 papers were published and presented as full papers, i.e., completed work (12 pages / 25-minute oral presentation), and 81 papers were accepted as short papers (54 of them for oral presentation). The organizing committee included the IC3K Conference Chairs: Ricardo da Silva Torres, Artificial Intelligence Group, Wageningen University & Research, Netherlands, and Jorge Bernardino, Polytechnic University of Coimbra, Portugal; and the IC3K 2025 Program Chairs: Le Gruenwald, School of Computer Science, University of Oklahoma, United States; Frans Coenen, University of Liverpool, United Kingdom; Jesualdo Tomás Fernández-Breis, University of Murcia, Spain; Lars Nolle, Jade University of Applied Sciences, Germany; Elio Masciari, University of Napoli Federico II, Italy; and David Aveiro, University of Madeira, NOVA-LINCS and ARDITI, Portugal. At the closing session, the conference recognized a few papers judged excellent in their class, presenting a "Best Paper Award", "Best Student Paper Award", and "Best Poster Award" for each of the co-located conferences.


Conformal prediction for full and sparse polynomial chaos expansions

Hatstatt, A., Zhu, X., Sudret, B.

arXiv.org Machine Learning

Polynomial Chaos Expansions (PCEs) are widely recognized for their computational efficiency in surrogate modeling. Yet a robust framework for quantifying local model errors is still lacking. While the local uncertainty of PCE predictions can be captured using bootstrap resampling, methods offering more rigorous statistical guarantees are needed, especially in the context of small training datasets. Recently, conformal prediction has demonstrated strong potential in machine learning, providing statistically robust and model-agnostic prediction intervals; its generality and versatility make it a compelling choice for PCE-based surrogate models. In this contribution, we explore its application to PCE-based surrogates. More precisely, we present the integration of two conformal prediction methods, namely the full conformal and Jackknife+ approaches, into both full and sparse PCEs. For full PCEs, we introduce computational shortcuts inspired by the inherent structure of regression methods to optimize the implementation of both conformal methods. For sparse PCEs, we incorporate the two approaches with appropriate modifications to the inference strategy, thereby circumventing the non-symmetric nature of the regression algorithm and ensuring valid prediction intervals. Our developments yield better-calibrated prediction intervals for both full and sparse PCEs, achieving superior coverage over existing approaches such as the bootstrap, while maintaining a moderate computational cost.
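To make the Jackknife+ idea mentioned above concrete, here is a minimal, model-agnostic sketch of Jackknife+ prediction intervals. This is not the paper's PCE-specific implementation; the `fit_predict` callback and all names are hypothetical placeholders for any regression procedure (a PCE fit would slot in there).

```python
import numpy as np

def jackknife_plus_intervals(X, y, X_test, fit_predict, alpha=0.1):
    """Jackknife+ prediction intervals (generic sketch).

    fit_predict(X_train, y_train, X_query) -> array of predictions at X_query.
    Returns (lower, upper) arrays of length len(X_test) with marginal
    coverage guaranteed to be at least 1 - 2*alpha.
    """
    n = len(y)
    lo_scores = np.empty((n, len(X_test)))
    hi_scores = np.empty((n, len(X_test)))
    for i in range(n):
        mask = np.arange(n) != i
        # Leave-one-out model: trained on all points except i
        pred_i = fit_predict(X[mask], y[mask], X[i:i + 1])[0]
        resid_i = abs(y[i] - pred_i)  # leave-one-out residual
        pred_test = fit_predict(X[mask], y[mask], X_test)
        lo_scores[i] = pred_test - resid_i
        hi_scores[i] = pred_test + resid_i
    # Empirical quantiles over the n leave-one-out models
    # (clamped so the index stays in range for very small n)
    k = min(int(np.ceil((1 - alpha) * (n + 1))), n)
    lower = np.sort(lo_scores, axis=0)[n - k]
    upper = np.sort(hi_scores, axis=0)[k - 1]
    return lower, upper
```

The n leave-one-out refits are exactly the cost the paper's "computational shortcuts" for full PCEs aim to avoid, since least-squares leave-one-out predictions can often be obtained in closed form.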


Empirical Risk Minimization with $f$-Divergence Regularization

Daunas, Francisco, Esnaola, Iñaki, Perlaza, Samir M., Poor, H. Vincent

arXiv.org Machine Learning

In this paper, the solution to the empirical risk minimization problem with $f$-divergence regularization (ERM-$f$DR) is presented, and conditions are established under which this solution also solves the minimization of the expected empirical risk subject to an $f$-divergence constraint. The proposed approach extends applicability to a broader class of $f$-divergences than previously reported and recovers earlier results as special cases. Additionally, the difference between the expected empirical risk of the ERM-$f$DR solution and that of its reference measure is characterized, providing insights into previously studied cases of $f$-divergences. A central contribution is the introduction of the normalization function, a mathematical object that is critical in both the dual formulation and practical computation of the ERM-$f$DR solution. This work presents an implicit characterization of the normalization function as a nonlinear ordinary differential equation (ODE), establishes its key properties, and subsequently leverages them to construct a numerical algorithm for approximating the normalization factor under mild assumptions. Further analysis demonstrates structural equivalences between ERM-$f$DR problems with different $f$-divergences via transformations of the empirical risk. Finally, the proposed algorithm is used to compute the training and test risks of ERM-$f$DR solutions under different $f$-divergence regularizers. This numerical example highlights the practical implications of choosing different functions $f$ in ERM-$f$DR problems.
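For orientation, the ERM-$f$DR setup described above can be sketched as follows (notation is ours, following the standard formulation in prior work on this problem, not necessarily the paper's):

```latex
% ERM with f-divergence regularization: optimize over probability
% measures P on the model space, relative to a reference measure Q.
\min_{P} \; \mathbb{E}_{\theta \sim P}\!\left[ \mathsf{L}_{z}(\theta) \right]
          + \lambda \, D_f\!\left( P \,\|\, Q \right),
\qquad \lambda > 0,
```

where $\mathsf{L}_{z}(\theta)$ is the empirical risk of model $\theta$ on dataset $z$. When a solution exists, it is typically characterized by a Radon-Nikodym derivative of the form

```latex
\frac{\mathrm{d}P^{\star}}{\mathrm{d}Q}(\theta)
  = \left( f' \right)^{-1}\!\left(
      \frac{N(\lambda) - \mathsf{L}_{z}(\theta)}{\lambda}
    \right),
```

where $N(\lambda)$ is precisely the normalization function the abstract refers to, chosen so that $P^{\star}$ integrates to one; its implicit ODE characterization is what enables the paper's numerical algorithm.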


The UK government is backing AI that can run its own lab experiments

MIT Technology Review

A competition calling for research projects involving so-called AI scientists shows just how fast this technology is moving. A number of startups and universities that are building "AI scientists" to design and run experiments in the lab, including robot biologists and chemists, have just won extra funding from the UK government agency that funds moonshot R&D. The competition, set up by ARIA (the Advanced Research and Invention Agency), gives a clear sense of how fast this technology is moving: The agency received 245 proposals from research teams that are already building tools capable of automating increasing amounts of lab work. ARIA defines an AI scientist as a system that can run an entire scientific workflow, coming up with hypotheses, designing and running experiments to test those hypotheses, and then analyzing the results. In many cases, the system may then feed those results back into itself and run the loop again and again. Human scientists become overseers, coming up with the initial research questions and then letting the AI scientist get on with the grunt work.



Multi-Agent Deep Reinforcement Learning for Collaborative UAV Relay Networks under Jamming Attacks

Nguyen, Thai Duong, Nguyen, Ngoc-Tan, Nguyen, Thanh-Dao, Van Huynh, Nguyen, Tran, Dinh-Hieu, Chatzinotas, Symeon

arXiv.org Artificial Intelligence

The deployment of Unmanned Aerial Vehicle (UAV) swarms as dynamic communication relays is critical for next-generation tactical networks. However, operating in contested environments requires navigating a complex trade-off: maximizing system throughput while ensuring collision avoidance and resilience against adversarial jamming. Existing heuristic-based approaches often struggle to find effective solutions due to the dynamic and multi-objective nature of this problem. This paper formulates the challenge as a cooperative Multi-Agent Reinforcement Learning (MARL) problem, solved using the Centralized Training with Decentralized Execution (CTDE) framework. Our approach employs a centralized critic that uses global state information to guide decentralized actors, which operate using only local observations. Simulation results show that our proposed framework significantly outperforms heuristic baselines, increasing the total system throughput by approximately 50% while simultaneously achieving a near-zero collision rate. A key finding is that the agents develop an emergent anti-jamming strategy without explicit programming. They learn to intelligently position themselves to balance the trade-off between mitigating interference from jammers and maintaining effective communication links with ground users.
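The CTDE information flow described above (global-state critic during training, local-observation actors at execution) can be sketched structurally as follows. This is a minimal illustration of the architecture, not the paper's implementation; all class names and the linear parameterizations are hypothetical stand-ins for the neural networks a real system would use.

```python
import numpy as np

class Actor:
    """Decentralized actor: maps its OWN local observation to an action.

    At execution time this is all an agent has access to.
    """
    def __init__(self, obs_dim, n_actions, rng):
        self.W = rng.normal(scale=0.1, size=(obs_dim, n_actions))

    def act(self, local_obs):
        logits = local_obs @ self.W
        return int(np.argmax(logits))  # greedy choice, for illustration

class CentralCritic:
    """Centralized critic: scores the joint state-action pair.

    It sees the GLOBAL state and every agent's action, but it is only
    used during training to compute gradients for the actors; it is
    discarded at execution time.
    """
    def __init__(self, state_dim, n_agents, rng):
        self.w = rng.normal(scale=0.1, size=state_dim + n_agents)

    def value(self, global_state, joint_actions):
        x = np.concatenate([global_state, np.asarray(joint_actions, float)])
        return float(x @ self.w)

# CTDE wiring for a 3-agent step: each actor acts on its local view only;
# the critic evaluates the joint outcome using information no single
# actor has, which is what lets cooperative behavior (e.g. the emergent
# anti-jamming positioning) be credited correctly during training.
def ctde_step(actors, critic, local_obs_list, global_state):
    joint_actions = [a.act(o) for a, o in zip(actors, local_obs_list)]
    q = critic.value(global_state, joint_actions)  # training-time signal
    return joint_actions, q
```

The design point is the asymmetry: removing the critic after training leaves a fully decentralized policy, which is what makes the approach deployable on UAVs with only local sensing.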